7 research outputs found

    Caching-based Multicast Message Authentication in Time-critical Industrial Control Systems

    Attacks against industrial control systems (ICSs) often exploit the insufficiency of authentication mechanisms. Verifying whether the received messages are intact and issued by legitimate sources can prevent malicious data/command injection by illegitimate or compromised devices. However, the key challenge is to introduce message authentication for various ICS communication models, including multicast or broadcast, with a messaging rate that can be as high as thousands of messages per second, within very stringent latency constraints. For example, certain commands for protection in smart grids must be delivered within 2 milliseconds, ruling out public-key cryptography. This paper proposes two lightweight message authentication schemes, named CMA and its multicast variant CMMA, that perform precomputation and caching to authenticate future messages. With minimal precomputation and communication overhead, C(M)MA eliminates all cryptographic operations for the source after the message is given, and all expensive cryptographic operations for the destinations after the message is received. C(M)MA considers the urgency profile (or likelihood) of a set of future messages for even faster verification of the most time-critical (or likely) messages. We demonstrate the feasibility of C(M)MA in an ICS setting based on a substation automation system in smart grids.
    Comment: For viewing INFOCOM proceedings in IEEE Xplore see https://ieeexplore.ieee.org/abstract/document/979676
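    To make the precompute-and-cache idea concrete, here is a minimal sketch (not the paper's actual C(M)MA construction): the source commits to every candidate command ahead of time with single-use hash commitments, so the time-critical path needs no cryptographic work at the source and only one hash plus a cache lookup at each receiver. The function names, the candidate set, and the assumption that commitments are pre-distributed over an authenticated channel are all illustrative.

    import hashlib
    import os

    def precompute(candidates):
        """Offline, at the source: commit to each candidate message with a fresh nonce."""
        nonces, commitments = {}, {}
        for msg in candidates:
            nonce = os.urandom(16)
            nonces[msg] = nonce
            commitments[msg] = hashlib.sha256(nonce + msg).digest()
        # Commitments are shipped to all receivers ahead of time (assumed authenticated).
        return nonces, commitments

    def send(nonces, msg):
        """Online, time-critical path at the source: no crypto, just a table lookup."""
        return msg, nonces.pop(msg)  # each commitment is single-use

    def verify(commitments, msg, nonce):
        """Online at a receiver: one hash against the cached commitment."""
        return commitments.pop(msg, None) == hashlib.sha256(nonce + msg).digest()

    candidates = [b"TRIP_BREAKER_7", b"CLOSE_BREAKER_7", b"STATUS_OK"]
    nonces, commitments = precompute(candidates)      # done during idle time
    msg, nonce = send(nonces, b"TRIP_BREAKER_7")      # latency-critical moment
    assert verify(commitments, msg, nonce)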

    Versioning, integrity and access control for collaborative applications over hosted data

    The objective of this thesis is to design a suite of techniques to facilitate the storage and manipulation of mutable content over untrusted storage (cloud/hosted) services in a more secure and efficient manner. We consider the storage service to be untrusted either because it is typically administered by a third party (as with data outsourcing), or because, even if administered by the data owner, Byzantine behavior of the storage service due to faults, bugs, or attacks cannot be discounted. The security of stored data is a widely acknowledged concern. This thesis primarily focuses on the classic CIA security triad: confidentiality, integrity, and availability. A critical sore point with security mechanisms is their associated overheads, so an important challenge, in addition to the functional correctness of the security mechanisms, is their efficiency. Thus, this thesis explores data structures and algorithms that enable efficient yet secure primitives for outsourcing data storage while supporting mutable and versioned content (as opposed to just static or append-only data). This ensures that feature-rich applications, such as collaborative and social applications, can be realized by leveraging the proposed security techniques. We focus first on the integrity of data, which can then be readily used to also ascertain availability; second, we present techniques which incorporate elements of confidentiality; and finally, we focus on the consistency of the data shared among collaborators. For the purposes of this thesis, it is assumed that the collaborators are trusted. In reality, determining whom to trust and provide access to a given set of data can be a challenging problem; however, this is beyond the scope of the presented work.
    Doctor of Philosophy (SCE)

    Two-factor authentication for trusted third party free dispersed storage

    We propose a trusted-third-party-free protocol for secure (in terms of content access, manipulation, and confidentiality) data storage and multi-user collaboration over an infrastructure of untrusted storage servers. This is achieved by combining data dispersal, encryption, and two-factor (knowledge and possession) authentication and access control techniques, so that unauthorized parties (attackers) or a small set of colluding servers cannot gain access to the stored data. The protocol design takes usability issues into account, in contrast to the closest prior work, Esiner and Datta (2016). We explore the security implications of the proposed model with event tree analysis and report experimental results to demonstrate the practicality of the approach with respect to computational overheads. Given that the protocol does not rely on any trusted third party, and most operations, including actual collaboration, do not require users to be online simultaneously, it is suitable not only for traditional multi-cloud setups but also for edge/fog computing environments.
    Accepted version
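    As a rough illustration of the two ingredients described above, the sketch below derives one key from a knowledge factor and a possession factor, then disperses a file as n XOR shares so that any proper subset of servers sees only uniformly random data. This is my own simplification for exposition; the paper's protocol, its access-control checks, and its collaboration features are more involved, and all names below are hypothetical.

    import hashlib
    import hmac
    import os

    def derive_key(password: bytes, possession_secret: bytes, salt: bytes) -> bytes:
        """Combine the knowledge and possession factors; neither alone suffices."""
        return hashlib.pbkdf2_hmac("sha256", password + possession_secret, salt, 200_000)

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def disperse(data: bytes, n: int, key: bytes):
        """n-of-n XOR dispersal: any n-1 colluding servers learn nothing about data."""
        shares = [os.urandom(len(data)) for _ in range(n - 1)]
        last = data
        for s in shares:
            last = xor_bytes(last, s)
        shares.append(last)
        # Tag each share so the client can detect tampering when it is fetched back.
        return [(s, hmac.new(key, s, hashlib.sha256).digest()) for s in shares]

    def recombine(tagged_shares, key: bytes) -> bytes:
        data = None
        for share, tag in tagged_shares:
            if not hmac.compare_digest(tag, hmac.new(key, share, hashlib.sha256).digest()):
                raise ValueError("share failed the integrity check")
            data = share if data is None else xor_bytes(data, share)
        return data

    salt = os.urandom(16)
    key = derive_key(b"user passphrase", b"secret held on the user's device", salt)
    stored = disperse(b"jointly edited document", n=3, key=key)  # one share per server
    assert recombine(stored, key) == b"jointly edited document"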

    On query result integrity over encrypted data

    We leverage authenticated data structures to guarantee the correctness and completeness of query results over encrypted data. Our contribution lies in bridging two independent lines of work (searchable encryption and provable data possession) into a general-purpose technique. It does so without increasing the client's storage overhead, while only a small token and a data structure are added on the server side (in comparison to a base searchable encryption scheme without mechanisms for determining result integrity), and the data structure can simultaneously be used for integrity checks on the stored data.
    Accepted version
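    One concrete (and deliberately simplified) way to realize such an authenticated data structure is a Merkle tree over sorted (search-token, result-digest) pairs held by the server: each query answer comes back with a logarithmic-size proof that it matches what was outsourced, and a proof over two adjacent leaves can likewise attest that an absent token has no results (completeness). The sketch below shows only the membership-proof mechanics and is not the specific construction of the paper; all identifiers are illustrative.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def build_tree(leaves):
        """Return all tree levels bottom-up; the last level is [root]."""
        levels = [leaves]
        while len(levels[-1]) > 1:
            cur = levels[-1]
            levels.append([h(cur[i] + (cur[i + 1] if i + 1 < len(cur) else cur[i]))
                           for i in range(0, len(cur), 2)])
        return levels

    def prove(levels, index):
        """Collect sibling hashes from the leaf up to (but excluding) the root."""
        proof = []
        for level in levels[:-1]:
            sibling = index ^ 1
            proof.append(level[sibling] if sibling < len(level) else level[index])
            index //= 2
        return proof

    def verify(root, leaf, index, proof):
        acc = leaf
        for sibling in proof:
            acc = h(acc + sibling) if index % 2 == 0 else h(sibling + acc)
            index //= 2
        return acc == root

    # The server stores encrypted results addressed by deterministic tokens, plus
    # this tree; the verifier needs only the root digest to check answers.
    entries = sorted([(b"tok_alpha", b"enc_ids_3_7"), (b"tok_beta", b"enc_ids_1"),
                      (b"tok_gamma", b"enc_ids_2_5")])
    leaves = [h(k + b"|" + v) for k, v in entries]
    levels = build_tree(leaves)
    root = levels[-1][0]
    assert verify(root, leaves[1], 1, prove(levels, 1))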

    Layered Security for Storage at the Edge: On Decentralized Multi-factor Access Control

    In this paper we propose a protocol that allows end-users in a decentralized setup (without requiring any trusted third party) to protect data shipped to remote servers using two factors: knowledge (passwords) and a portable possession factor (a time-based one-time password generator used for authentication). The protocol also supports revocation and recreation of a new possession factor if the older possession factor is compromised, provided the legitimate owner still has a copy of it. Furthermore, akin to some other recent works, our approach naturally protects the outsourced data from the storage servers themselves, through encryption and dispersal of information across multiple servers. We also extend the basic protocol to demonstrate how collaboration can be supported even while the stored content is encrypted, with each collaborator still constrained to access the data through a multi-factor access mechanism. Such techniques for achieving layered security are crucial to (opportunistically) harnessing storage resources from untrusted entities.
    MOE (Min. of Education, S’pore)
    Accepted version
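    The possession factor mentioned above can be pictured as a time-based one-time password seed carried by the user. The sketch below shows a standard RFC 6238-style code generator and one hypothetical way the factor could be rotated when the old seed is suspected compromised; it illustrates the ideas only and is not the decentralized verification protocol of the paper, and the names are my own.

    import hashlib
    import hmac
    import struct
    import time

    def totp(seed: bytes, at=None, step: int = 30, digits: int = 6) -> str:
        """RFC 6238-style time-based one-time password from the possession factor."""
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def rotate_possession_factor(old_seed: bytes, password: bytes) -> bytes:
        """Hypothetical rotation: only someone holding both the old seed and the
        password can derive (and re-register) the replacement seed."""
        return hmac.new(old_seed, b"rotate|" + password, hashlib.sha256).digest()

    seed = b"seed provisioned onto the user's device"
    print(totp(seed))                     # code presented alongside the password
    new_seed = rotate_possession_factor(seed, b"user passphrase")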

    FlexDPDP: FlexList-based Optimized Dynamic Provable Data Possession

    With the popularity of cloud storage, efficiently proving the integrity of data stored at an untrusted server has become significant. Authenticated Skip Lists and Rank-based Authenticated Skip Lists (RBASL) have been used in cloud storage to support provable data update operations. In a dynamic file scenario, an RBASL falls short when updates are not proportional to a fixed block size; such an update to the file, however small, may translate into O(n) block updates to the RBASL for a file with n blocks. To overcome this problem, we introduce FlexList: a Flexible Length-Based Authenticated Skip List. FlexList translates even variable-size updates into O(u) insertions, removals, or modifications, where u is the size of the update divided by the block size. We present various optimizations on the four types of skip lists (regular, authenticated, rank-based authenticated, and FlexList). We compute one single proof to answer multiple (non-)membership queries and obtain efficiency gains of 35%, 35%, and 40% in terms of proof time, energy, and size, respectively. We also deployed our implementation of FlexDPDP (DPDP with FlexList instead of RBASL) on PlanetLab, demonstrating that FlexDPDP performs comparably to the most efficient static storage scheme (PDP) while providing support for dynamic data.
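    The central point, that a variable-size edit should touch only the blocks it overlaps, can be illustrated without the skip list itself. In the sketch below (my own illustration, not the FlexList code; function and parameter names are hypothetical), an edit of b bytes over variable-length blocks produces roughly ceil(b / cap) insert/remove operations plus a constant number of boundary rewrites, independent of the total number of blocks n.

    def apply_edit(blocks, offset, old_len, new_bytes, cap=4096):
        """Replace old_len bytes at offset with new_bytes, touching only the
        overlapping blocks. Returns the new block list and the list of block-level
        operations (removals by original index, insertions by new index)."""
        ops, out, pos, i = [], [], 0, 0
        # Untouched leading blocks are carried over as-is.
        while i < len(blocks) and pos + len(blocks[i]) <= offset:
            out.append(blocks[i])
            pos += len(blocks[i])
            i += 1
        # Gather the bytes of every block that overlaps the edited range.
        span_start, span = pos, b""
        while i < len(blocks) and pos < offset + old_len:
            span += blocks[i]
            pos += len(blocks[i])
            ops.append(("remove", i))
            i += 1
        # Splice the edit into that span and re-chunk only the spliced bytes.
        cut = offset - span_start
        spliced = span[:cut] + new_bytes + span[cut + old_len:]
        chunks = [spliced[j:j + cap] for j in range(0, len(spliced), cap)]
        for k, chunk in enumerate(chunks):
            ops.append(("insert", len(out) + k, chunk))
        out.extend(chunks)
        out.extend(blocks[i:])  # untouched trailing blocks, no cascading updates
        return out, ops

    blocks = [b"a" * 4096] * 1000
    new_blocks, ops = apply_edit(blocks, offset=5000, old_len=10, new_bytes=b"patch")
    assert len(ops) <= 4  # a tiny edit touches O(1) blocks, not O(n)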